A lightweight codebase that enables memory-efficient and performant finetuning of Mistral's models. It is based on LoRA, a training paradigm in which most weights are frozen and only 1-2% additional weights, in the form of low-rank matrix perturbations, are trained.
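The low-rank idea above can be sketched in a few lines. This is a minimal, hypothetical illustration (not mistral-finetune's actual API): a frozen weight `W` is perturbed by a trainable product `B @ A` of rank `r`, so only `r * (d_in + d_out)` extra parameters are trained instead of `d_in * d_out`.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r = 512, 512, 4           # r << d_in, d_out (shapes chosen for illustration)

W = rng.normal(size=(d_out, d_in))     # pretrained weight, frozen during finetuning
A = rng.normal(size=(r, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, r))               # zero-initialized so the perturbation starts at 0

def lora_forward(x, alpha=16.0):
    # Effective weight is W + (alpha / r) * B @ A,
    # applied without ever materializing the full perturbed matrix.
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

x = rng.normal(size=(1, d_in))
y = lora_forward(x)

# Trainable fraction: the low-rank factors add only a small percentage of parameters.
extra, frozen = A.size + B.size, W.size
print(f"extra params: {extra} ({100 * extra / frozen:.2f}% of frozen)")
```

Because `B` starts at zero, the model's output is initially unchanged; training then moves only `A` and `B`, which here amount to well under 2% of the frozen weight's parameter count.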
- GitHub repository hosting the tutorial series "0 to LitGPT."
- Provides an overview of how to get started with LitGPT, an open-source library for pretraining, finetuning, and deploying large language models.